Slack and Margin Rescaling as Convex Extensions of Supermodular Functions
Author
Abstract
Slack and margin rescaling are variants of the structured output SVM. They define convex surrogates to task-specific loss functions which, when specialized to non-additive loss functions for multi-label problems, yield extensions to increasing set functions. We demonstrate in this paper that these concepts can be used to define polynomial-time convex extensions of arbitrary supermodular functions. We further show that slack and margin rescaling can be interpreted as dominating convex extensions over multiplicative and additive families, and that margin rescaling is strictly dominated by slack rescaling. However, we also demonstrate that, while the function value and gradient of margin rescaling can be computed in polynomial time, computing them for slack rescaling corresponds to a non-supermodular maximization problem.
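The two surrogates contrasted in the abstract can be illustrated with a brute-force sketch (a hypothetical toy implementation for small n, not the paper's polynomial-time construction): margin rescaling adds the task loss to each candidate labeling's score, while slack rescaling multiplies the loss by the margin violation.

```python
from itertools import product

def margin_rescaling(delta, scores, y_star):
    """Margin-rescaling surrogate: max_y [ Delta(y) + <s, y> ] - <s, y*>.

    delta: task loss as a function of a 0/1 labeling tuple.
    scores: per-label scores s (the model's linear output).
    y_star: ground-truth labeling. Exhaustive search over all 2^n labelings.
    """
    n = len(scores)
    score = lambda y: sum(s * yi for s, yi in zip(scores, y))
    return max(delta(y) + score(y)
               for y in product((0, 1), repeat=n)) - score(y_star)

def slack_rescaling(delta, scores, y_star):
    """Slack-rescaling surrogate: max_y Delta(y) * (1 + <s, y> - <s, y*>)."""
    n = len(scores)
    score = lambda y: sum(s * yi for s, yi in zip(scores, y))
    return max(delta(y) * (1 + score(y) - score(y_star))
               for y in product((0, 1), repeat=n))
```

Note the structural difference the abstract turns on: for margin rescaling, the inner problem max_y Delta(y) + <s, y> is itself supermodular maximization whenever Delta is supermodular, whereas the multiplicative form in slack rescaling does not preserve this structure.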
Similar Papers
The Lovász Hinge: A Convex Surrogate for Submodular Losses
Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates iff the underlying loss function is increasing in the number of incorrect predictions. Ho...
Learning Submodular Losses with the Lovász Hinge
Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates iff the underlying loss function is increasing in the number of incorrect predictions. Ho...
Entropy and Margin Maximization for Structured Output Learning
We consider the problem of training discriminative structured output predictors, such as conditional random fields (CRFs) and structured support vector machines (SSVMs). A generalized loss function is introduced, which jointly maximizes the entropy and the margin of the solution. The CRF and SSVM emerge as special cases of our framework. The probabilistic interpretation of large margin methods ...
Minimizing Discrete Convex Functions with Linear Inequality Constraints
A class of discrete convex functions that can efficiently be minimized has been considered by Murota. Among them are L♮-convex functions, which are natural extensions of submodular set functions. We first consider the problem of minimizing an L♮-convex function with a linear inequality constraint having a positive normal vector. We propose a polynomial algorithm to solve it based on a binary se...
Fast and Scalable Structural SVM with Slack Rescaling
We present an efficient method for training slack-rescaled structural SVM. Although finding the most violating label in a margin-rescaled formulation is often easy since the target function decomposes with respect to the structure, this is not the case for a slack-rescaled formulation, and finding the most violated label might be very difficult. Our core contribution is an efficient method for f...